1 - 20 of 5,220
1.
J Vis ; 24(5): 1, 2024 May 01.
Article En | MEDLINE | ID: mdl-38691088

Still life paintings comprise a wealth of data on visual perception. Prior work has shown that the color statistics of objects show a marked bias for warm colors. Here, we ask about the relative chromatic contrast of these object-associated colors compared with background colors in still life paintings. We reasoned that, owing to the memory color effect, whereby the colors of familiar objects are perceived as more saturated, warm colors will be relatively more saturated than cool colors in still life paintings as compared with photographs. We analyzed color in 108 slides of still life paintings of fruit from the teaching slide collection of the Fogg University Art Museum and 41 color-calibrated photographs of fruit from the McGill data set. The results show that the relative chromatic contrast of warm colors was greater in paintings than in photographs, consistent with the hypothesis.
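
As an illustration of the analysis described above, and not the authors' actual pipeline, the warm-versus-cool saturation comparison could be approximated in a few lines of Python; the hue thresholds and the input filename below are assumptions:

```python
import numpy as np
from PIL import Image
from skimage.color import rgb2hsv

def warm_cool_saturation(path):
    """Compare mean saturation of warm vs. cool hues in one image.

    Rough illustration only: 'warm' is taken here as hues below 60°
    or above ~330° (reds through yellows); the study's color metrics
    were computed differently.
    """
    rgb = np.asarray(Image.open(path).convert("RGB")) / 255.0
    hsv = rgb2hsv(rgb)
    hue, sat = hsv[..., 0], hsv[..., 1]      # both scaled to [0, 1]
    warm = (hue < 0.17) | (hue > 0.92)
    return sat[warm].mean(), sat[~warm].mean()

warm_s, cool_s = warm_cool_saturation("still_life.jpg")  # hypothetical file
print(f"warm saturation {warm_s:.3f} vs cool saturation {cool_s:.3f}")
```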


Color Perception , Fruit , Paintings , Photography , Humans , Color Perception/physiology , Photography/methods , Color , Contrast Sensitivity/physiology
2.
Sensors (Basel) ; 24(9)2024 Apr 26.
Article En | MEDLINE | ID: mdl-38732872

This paper presents an experimental evaluation of a wearable light-emitting diode (LED) transmitter in an optical camera communications (OCC) system. The evaluation is conducted under conditions of controlled user movement during indoor physical exercise, encompassing both mild and intense exercise scenarios. We introduce an image processing algorithm designed to identify a template signal transmitted by the LED and detected within the image. To enhance this process, we exploit the dynamics of controlled exercise-induced motion to limit the tracking process to a smaller region within the image. We demonstrate the feasibility of detecting the transmitting source within the frames, achieving a reduction in the tracked region of 87.3% for mild exercise and 79.0% for intense exercise.
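
A minimal sketch of the template-detection step, assuming OpenCV and placeholder image files; the fixed search window below stands in for the motion-derived region described in the abstract:

```python
import cv2

# Hypothetical inputs: one video frame and the LED template signal.
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("led_template.png", cv2.IMREAD_GRAYSCALE)

# Restrict the search to an assumed region of interest; in the paper
# this region is derived from the exercise-induced motion dynamics.
x0, y0, x1, y1 = 200, 100, 520, 380
roi = frame[y0:y1, x0:x1]

# Normalized cross-correlation; the best match locates the LED.
scores = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)
led_xy = (x0 + max_loc[0], y0 + max_loc[1])
print(f"LED found at {led_xy} with score {max_val:.2f}")
```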


Algorithms , Exercise , Wearable Electronic Devices , Humans , Exercise/physiology , Image Processing, Computer-Assisted/methods , Photography/instrumentation , Photography/methods , Delivery of Health Care
3.
Meat Sci ; 213: 109500, 2024 Jul.
Article En | MEDLINE | ID: mdl-38582006

The objective of this study was to develop calibration models against rib eye traits and independently validate the precision, accuracy, and repeatability of the Frontmatec Q-FOM™ Beef grading camera in Australian carcasses. This study compiled 12 research datasets acquired from commercial processing facilities, comprising a diverse range of carcass phenotypes graded by industry-identified expert Meat Standards Australia (MSA) graders and sampled for chemical intramuscular fat (IMF%). Calibration performance was maintained when the device was independently validated. For continuous traits, the Q-FOM™ demonstrated precise (root mean squared error of prediction, RMSEP) and accurate (coefficient of determination, R2) prediction of eye muscle area (EMA) (R2 = 0.89, RMSEP = 4.3 cm2, slope = 0.96, bias = 0.7), MSA marbling (R2 = 0.95, RMSEP = 47.2, slope = 0.98, bias = -12.8) and chemical IMF% (R2 = 0.94, RMSEP = 1.56%, slope = 0.96, bias = 0.64). For categorical traits, the Q-FOM™ predicted 61%, 64.3% and 60.8% of AUS-MEAT marbling, meat colour and fat colour scores, respectively, as equivalent to expert grader scores, and 95% within ±1 class. The Q-FOM™ also demonstrated very high repeatability and reproducibility across all traits.
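
The validation statistics reported here (R2, RMSEP, slope, bias) can be reproduced with a short sketch; the definitions below (bias as mean error, slope from a least-squares fit) are common choices and may differ from the study's exact protocol:

```python
import numpy as np

def calibration_metrics(reference, predicted):
    """Precision/accuracy metrics for device predictions vs. reference values."""
    ref = np.asarray(reference, float)
    pred = np.asarray(predicted, float)
    err = pred - ref
    rmsep = np.sqrt(np.mean(err ** 2))      # root mean squared error of prediction
    bias = err.mean()                       # systematic over/under-prediction
    slope = np.polyfit(ref, pred, 1)[0]     # 1.0 means no proportional bias
    r2 = 1 - np.sum(err ** 2) / np.sum((ref - ref.mean()) ** 2)
    return dict(R2=r2, RMSEP=rmsep, slope=slope, bias=bias)

# Toy data standing in for chemical IMF% reference vs. camera prediction.
print(calibration_metrics([2.1, 5.4, 8.9, 12.3], [2.6, 5.8, 9.2, 13.4]))
```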


Adipose Tissue , Color , Muscle, Skeletal , Photography , Red Meat , Animals , Australia , Cattle , Red Meat/analysis , Red Meat/standards , Photography/methods , Calibration , Phenotype , Reproducibility of Results , Ribs
4.
Ann Plast Surg ; 92(4): 367-372, 2024 Apr 01.
Article En | MEDLINE | ID: mdl-38527337

STATEMENT OF THE PROBLEM: Standardized medical photography of the face is a vital part of patient documentation, clinical evaluation, and scholarly dissemination. Because digital photography is a mainstay in clinical care, there is a critical need for an easy-to-use mobile device application that can assist users in taking a standardized clinical photograph. ImageAssist was developed to answer this need. The mobile application is integrated into the electronic medical record (EMR); it implements and automates American Society of Plastic Surgeons/Plastic Surgery Research Foundation photographic guidelines with background deletion. INITIAL PRODUCT DEVELOPMENT: A team consisting of a craniofacial plastic surgeon and the Health Information Technology product group developed and implemented the pilot application of ImageAssist. The application launches directly from the patient's chart in the mobile version of the EMR, Epic Haiku (Verona, Wisconsin). Standard views of the face (90-degree, oblique left and right, front, and basal views) were built into digital templates and are user selected. Red digital frames overlay the patient's face on the screen and turn green once standardized alignment is achieved, prompting the user to capture. The background is then digitally subtracted to a standard blue, and the photograph is not stored on the user's phone. EARLY USER EXPERIENCE: ImageAssist's initial beta user group was limited to 13 providers across dermatology, ENT, and plastic surgery. A mix of physicians, advanced practice providers, and nurses piloted the application in the outpatient clinic setting using ImageAssist on their smartphones. After using the app, an internal survey was used to gather feedback on the user experience. In the first 2 years of use, 31 users have taken more than 3,400 photographs in more than 800 clinical encounters. Since the initial release, automated background deletion has also been functional for any anatomic area. CONCLUSIONS: ImageAssist is a novel smartphone application that standardizes clinical photography and is integrated into the EMR, which could save both time and expense for clinicians seeking to take consistent clinical images. Future steps include continued refinement of the current image capture functionality and development of a stand-alone mobile device application.
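
The abstract does not describe how the background deletion is implemented; as one plausible approach only, a sketch using OpenCV's GrabCut with an assumed face region and an arbitrary stand-in blue:

```python
import cv2
import numpy as np

# One plausible background-deletion approach (not ImageAssist's
# documented method): GrabCut foreground extraction with an assumed
# rectangle around the face, then a stand-in blue background fill.
img = cv2.imread("clinic_photo.jpg")                 # hypothetical input
mask = np.zeros(img.shape[:2], np.uint8)
rect = (50, 50, img.shape[1] - 100, img.shape[0] - 100)
bgd = np.zeros((1, 65), np.float64)
fgd = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

foreground = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))
out = np.zeros_like(img)
out[:] = (153, 76, 0)                                # stand-in blue (BGR)
out[foreground] = img[foreground]
cv2.imwrite("standardized.jpg", out)
```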


Mobile Applications , Plastic Surgery Procedures , Surgery, Plastic , Humans , United States , Smartphone , Photography/methods
5.
Biomed Eng Online ; 23(1): 32, 2024 Mar 12.
Article En | MEDLINE | ID: mdl-38475784

PURPOSE: This study aimed to investigate the imaging repeatability of self-service fundus photography compared to traditional fundus photography performed by experienced operators. DESIGN: Prospective cross-sectional study. METHODS: At a community-based eye disease screening site, we recruited 65 eyes (65 participants) from the resident population of Shanghai, China. All participants were free of cataract or any other condition that could compromise the quality of fundus imaging. Participants were assigned to either the fully self-service or the traditional fundus photography group. Image quantitative analysis software was used to extract clinically relevant indicators from the fundus images. Finally, a statistical analysis was performed to assess the imaging repeatability of fully self-service fundus photography. RESULTS: There was no statistical difference between the two groups in the absolute differences or the extents of variation of the indicators. The extents of variation of all measurement indicators, with the exception of the optic cup area, were below 10% in both groups. The Bland-Altman plots and multivariate analysis results were consistent with the results above. CONCLUSIONS: The imaging repeatability of fully self-service fundus photography is comparable to that of traditional fundus photography performed by professionals, demonstrating promise for large-scale eye disease screening programs.
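
A minimal sketch of the Bland-Altman analysis named in the results, using hypothetical repeated measurements:

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(first, second, label):
    """Plot agreement between two repeated measurements of one indicator."""
    a, b = np.asarray(first, float), np.asarray(second, float)
    mean, diff = (a + b) / 2, a - b
    md, sd = diff.mean(), diff.std(ddof=1)
    plt.scatter(mean, diff, s=12)
    plt.axhline(md, color="k")
    for k in (-1.96, 1.96):                  # 95% limits of agreement
        plt.axhline(md + k * sd, color="k", linestyle="--")
    plt.xlabel(f"mean {label}")
    plt.ylabel(f"difference {label}")
    plt.show()

# Hypothetical repeated cup-to-disc ratio measurements per eye.
bland_altman([0.41, 0.35, 0.52, 0.47], [0.43, 0.33, 0.50, 0.49], "cup/disc ratio")
```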


Community Health Services , Glaucoma , Humans , Cross-Sectional Studies , Prospective Studies , China , Photography/methods , Fundus Oculi
6.
Burns ; 50(4): 966-979, 2024 May.
Article En | MEDLINE | ID: mdl-38331663

AIM: This study was conducted to determine the segmentation, classification, object detection, and accuracy of skin burn images using artificial intelligence and a mobile application. With this application, individuals can determine the degree of a burn and see how to intervene. METHODS: This research was conducted between 26 October 2021 and 1 September 2023. The dataset was assembled in two stages. In the first stage, an open-access dataset was taken from https://universe.roboflow.com/, and the burn images dataset was created. In the second stage, to determine the accuracy of the developed system and artificial intelligence model, patients admitted to the hospital were assessed with our own Burn Wound Detection Android application. RESULTS: The YOLOv7 architecture was used for segmentation, classification, and object detection. The dataset contains 21,018 images, of which 80% were used as training data and 20% as test data. The YOLOv7 model achieved a success rate of 75.12% on the test data. The Burn Wound Detection Android application developed in this study was used to accurately detect burn images from individuals. CONCLUSION: In this study, skin burn images were segmented, classified, and object-detected, and a mobile application was developed using artificial intelligence. First aid is crucial in burn cases, and it is an important development for public health that people living in peripheral areas can quickly determine the degree of a burn through the mobile application and provide first aid according to its instructions.


Artificial Intelligence , Burns , Mobile Applications , Burns/classification , Burns/diagnostic imaging , Burns/pathology , Humans , Photography/methods
7.
Int Ophthalmol ; 44(1): 41, 2024 Feb 09.
Article En | MEDLINE | ID: mdl-38334896

Diabetic retinopathy (DR) is the leading global cause of vision loss, accounting for 4.8% of global blindness cases as estimated by the World Health Organization (WHO). Fundus photography is crucial in ophthalmology as a diagnostic tool for capturing retinal images. However, resource and infrastructure constraints limit access to traditional tabletop fundus cameras in developing countries. Additionally, these conventional cameras are expensive, bulky, and not easily transportable. In contrast, the newer generation of handheld and smartphone-based fundus cameras offers portability, user-friendliness, and affordability. Despite their potential, there is a lack of comprehensive review studies examining the clinical utility of these handheld (e.g. Zeiss Visuscout 100, Volk Pictor Plus, Volk Pictor Prestige, Remidio NMFOP, FC161) and smartphone-based (e.g. D-EYE, iExaminer, Peek Retina, Volk iNview, Volk Vistaview, oDocs visoScope, oDocs Nun, oDocs Nun IR) fundus cameras. This review evaluates the feasibility and practicality of the available handheld and smartphone-based cameras in medical settings, emphasizing their advantages over traditional tabletop fundus cameras. By highlighting various clinical settings and use scenarios, it assesses the efficiency, feasibility, cost-effectiveness, and remote capabilities of handheld and smartphone fundus cameras, ultimately enhancing the accessibility of ophthalmic services.


Diabetes Mellitus , Diabetic Retinopathy , Eye Diseases , Humans , Diabetic Retinopathy/diagnosis , Smartphone , Fundus Oculi , Retina , Eye Diseases/diagnosis , Photography/methods , Blindness
9.
Diabetes Care ; 47(2): 304-319, 2024 02 01.
Article En | MEDLINE | ID: mdl-38241500

BACKGROUND: Diabetic macular edema (DME) is the leading cause of vision loss in people with diabetes. Application of artificial intelligence (AI) in interpreting fundus photography (FP) and optical coherence tomography (OCT) images allows prompt detection and intervention. PURPOSE: To evaluate the performance of AI in detecting DME from FP or OCT images and identify potential factors affecting model performance. DATA SOURCES: We searched seven electronic libraries up to 12 February 2023. STUDY SELECTION: We included studies using AI to detect DME from FP or OCT images. DATA EXTRACTION: We extracted study characteristics and performance parameters. DATA SYNTHESIS: Fifty-three studies were included in the meta-analysis. FP-based algorithms of 25 studies yielded pooled area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity of 0.964, 92.6%, and 91.1%, respectively. OCT-based algorithms of 28 studies yielded pooled AUROC, sensitivity, and specificity of 0.985, 95.9%, and 97.9%, respectively. Potential factors improving model performance included the use of deep learning techniques and larger, more diverse training data sets. Models demonstrated better performance when validated internally than externally, and those trained with multiple data sets showed better results upon external validation. LIMITATIONS: Analyses were limited by unstandardized algorithm outcomes and insufficient data on patient demographics, OCT volumetric scans, and external validation. CONCLUSIONS: This meta-analysis demonstrates satisfactory performance of AI in detecting DME from FP or OCT images. External validation is warranted for future studies to evaluate model generalizability. Further investigations may estimate optimal sample size, the effect of class balance, patient demographics, and the additional benefit of OCT volumetric scans.
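
For intuition on how per-study estimates are combined, a deliberately simplified fixed-effect pooling of sensitivities on the logit scale is sketched below; diagnostic meta-analyses such as this one typically use bivariate random-effects models instead, so treat this only as an illustration of the idea:

```python
import numpy as np

def pooled_proportion(events, totals):
    """Fixed-effect pooling of per-study sensitivities (or specificities).

    Logit-transform each study's proportion, weight by inverse
    variance (delta method), and back-transform the weighted mean.
    """
    n = np.asarray(totals, float)
    p = np.asarray(events, float) / n
    logit = np.log(p / (1 - p))
    var = 1 / (n * p * (1 - p))              # approx. variance of logit(p)
    w = 1 / var
    pooled_logit = np.sum(w * logit) / np.sum(w)
    return 1 / (1 + np.exp(-pooled_logit))

# Toy per-study true-positive and diseased-eye counts.
print(f"pooled sensitivity = {pooled_proportion([90, 180, 45], [100, 200, 50]):.3f}")
```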


Diabetes Mellitus , Diabetic Retinopathy , Macular Edema , Humans , Diabetic Retinopathy/diagnostic imaging , Diabetic Retinopathy/complications , Macular Edema/diagnostic imaging , Macular Edema/etiology , Artificial Intelligence , Tomography, Optical Coherence/methods , Photography/methods
10.
IEEE Trans Med Imaging ; 43(5): 1945-1957, 2024 May.
Article En | MEDLINE | ID: mdl-38206778

Color fundus photography (CFP) and optical coherence tomography (OCT) are two of the most widely used modalities in the clinical diagnosis and management of retinal diseases. Despite the widespread use of multimodal imaging in clinical practice, few methods for automated diagnosis of eye diseases effectively utilize the correlated and complementary information from multiple modalities. This paper explores how to leverage the information from CFP and OCT images to improve the automated diagnosis of retinal diseases. We propose a novel multimodal learning method, named geometric correspondence-based multimodal learning network (GeCoM-Net), to achieve the fusion of CFP and OCT images. Specifically, inspired by clinical observations, we consider the geometric correspondence between an OCT slice and the corresponding CFP region to learn the correlated features of the two modalities for robust fusion. Furthermore, we design a new feature selection strategy to extract discriminative OCT representations by automatically selecting the important feature maps from OCT slices. Unlike existing multimodal learning methods, GeCoM-Net is the first method to explicitly formulate the geometric relationship between an OCT slice and the corresponding region of the CFP image for CFP-OCT fusion. Experiments have been conducted on a large-scale private dataset and a publicly available dataset to evaluate the effectiveness of GeCoM-Net for diagnosing diabetic macular edema (DME), impaired visual acuity (VA), and glaucoma. The empirical results show that our method outperforms the current state-of-the-art multimodal learning methods, improving the AUROC score by 0.4%, 1.9%, and 2.9% for DME, VA, and glaucoma detection, respectively.
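
GeCoM-Net itself is not reproduced here, but the underlying idea of geometric correspondence can be illustrated with a toy sketch: crop the CFP strip that an OCT B-scan geometrically corresponds to and concatenate simple features from both modalities (all shapes and features below are stand-ins):

```python
import numpy as np

def fuse_oct_slice_with_cfp(cfp, oct_slice, row):
    """Toy illustration of geometric correspondence-based fusion.

    Not GeCoM-Net: we crop the CFP band under the OCT scan line at
    'row' and concatenate trivial features from both modalities.
    """
    band = cfp[max(row - 8, 0): row + 8]     # CFP region under the scan line
    cfp_feat = band.mean(axis=(0, 1))        # per-channel mean of the strip
    oct_feat = oct_slice.mean(axis=0)        # per-column mean of the B-scan
    return np.concatenate([cfp_feat, oct_feat])

cfp = np.random.rand(512, 512, 3)            # stand-in color fundus photo
oct_slice = np.random.rand(496, 64)          # stand-in OCT B-scan
print(fuse_oct_slice_with_cfp(cfp, oct_slice, row=256).shape)  # (3 + 64,)
```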


Image Interpretation, Computer-Assisted , Multimodal Imaging , Tomography, Optical Coherence , Humans , Tomography, Optical Coherence/methods , Multimodal Imaging/methods , Image Interpretation, Computer-Assisted/methods , Algorithms , Retinal Diseases/diagnostic imaging , Retina/diagnostic imaging , Machine Learning , Photography/methods , Diagnostic Techniques, Ophthalmological , Databases, Factual
11.
Klin Monbl Augenheilkd ; 241(1): 75-83, 2024 Jan.
Article En | MEDLINE | ID: mdl-38242135

Cataract is among the leading causes of visual impairment worldwide. Innovations in treatment have drastically improved patient outcomes, but to be properly implemented, the right diagnostic tools are necessary. This review explores the cataract grading systems developed by researchers in recent decades and provides insight into both their merits and limitations. To this day, the gold standard for cataract classification is the Lens Opacity Classification System III, in which different cataract features are graded against standard photographs during slit lamp examination. Although widely used in research, its clinical application is rare, and it is limited by its subjective nature. Meanwhile, recent advancements in imaging technology, notably Scheimpflug imaging and optical coherence tomography, have opened the possibility of objective assessment of lens structure. Using automatic lens anatomy detection software, researchers demonstrated a good correlation with functional and surgical metrics such as visual acuity, phacoemulsification energy, and surgical time. The development of deep learning networks has further increased the capability of these grading systems by improving interpretability and increasing robustness when applied to norm-deviating cases. These classification systems, which can be used for both screening and preoperative diagnostics, are of value for targeted prospective studies, but still require implementation and validation in everyday clinical practice.


Cataract , Lens, Crystalline , Phacoemulsification , Humans , Prospective Studies , Photography/methods , Cataract/diagnosis , Visual Acuity , Phacoemulsification/methods
12.
J Biomed Opt ; 29(Suppl 1): S11524, 2024 Jan.
Article En | MEDLINE | ID: mdl-38292055

Significance: Compressed ultrafast photography (CUP) is currently the world's fastest single-shot imaging technique. Through the integration of compressed sensing and streak imaging, CUP can capture a transient event in a single camera exposure at imaging speeds from thousands to trillions of frames per second, at micrometer-level spatial resolutions, and across broad sensing spectral ranges. Aim: This tutorial aims to provide a comprehensive review of CUP's fundamental methods, system implementations, biomedical applications, and prospects. Approach: A step-by-step guide to CUP's forward model and representative image reconstruction algorithms is presented, with sample code and illustrations in MATLAB and Python. CUP's hardware implementation is then described with a focus on the representative techniques, advantages, and limitations of the three key components: the spatial encoder, the temporal shearing unit, and the two-dimensional sensor. Furthermore, four representative biomedical applications enabled by CUP are discussed, followed by prospects for CUP's technical advancement. Conclusions: CUP has emerged as a state-of-the-art ultrafast imaging technology. Its advanced imaging ability and versatility contribute to unprecedented observations and new applications in biomedicine. CUP holds great promise for improving technical specifications and facilitating the investigation of biomedical processes.
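
The forward model is compact enough to sketch in numpy under the definitions above: a dynamic scene is spatially encoded by a pseudo-random binary mask (C), sheared along one spatial axis (S), and integrated on the sensor (T), giving a single streak image y = TSCx. Dimensions below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, T = 64, 64, 16
scene = rng.random((T, H, W))                # dynamic scene x(x, y, t)
mask = rng.integers(0, 2, size=(H, W))       # C: pseudo-random binary encoding

# S then T: shear each encoded frame down by t pixels, integrate on sensor.
snapshot = np.zeros((H + T, W))
for t in range(T):
    snapshot[t:t + H] += mask * scene[t]     # frame t lands t rows lower

# Reconstruction inverts y = TSCx with a sparsity prior (e.g., TwIST);
# that step is the subject of the tutorial's sample code and is omitted.
print(snapshot.shape)                        # (80, 64) streak-camera image
```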


Image Processing, Computer-Assisted , Photography , Photography/methods , Image Processing, Computer-Assisted/methods , Algorithms
13.
Indian J Ophthalmol ; 72(Suppl 2): S280-S296, 2024 Feb 01.
Article En | MEDLINE | ID: mdl-38271424

PURPOSE: To compare the quantification of intraretinal hard exudate (HE) using en face optical coherence tomography (OCT) and fundus photography. METHODS: Consecutive en face images and corresponding fundus photographs from 13 eyes of 10 patients with macular edema associated with diabetic retinopathy or Coats' disease were analyzed using the machine-learning-based image analysis tool, "ilastik." RESULTS: The overall measured HE area was greater with en face images than with fundus photos (en face: 0.49 ± 0.35 mm2 vs. fundus photo: 0.34 ± 0.34 mm2, P < 0.001). However, there was an excellent correlation between the two measurements (intraclass correlation coefficient [ICC] = 0.844). There was a negative correlation between HE area and central macular thickness (CMT) (r = -0.292, P = 0.001). However, HE area showed a positive correlation with CMT in the previous several months, especially in eyes treated with anti-vascular endothelial growth factor (VEGF) therapy (CMT 3 months before: r = 0.349, P = 0.001; CMT 4 months before: r = 0.287, P = 0.012). CONCLUSION: Intraretinal HE can be reliably quantified from either en face OCT images or fundus photography with the aid of an interactive machine learning-based image analysis tool. HE area changes lagged several months behind CMT changes, especially in eyes treated with anti-VEGF injections.
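
A minimal sketch of the correlation analyses reported above, with hypothetical paired measurements; the study additionally reports an ICC, which is not computed here:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired measurements: HE area (mm^2) from en face OCT
# vs. the area measured on the corresponding fundus photograph.
en_face = np.array([0.21, 0.48, 0.90, 0.33, 0.62])
fundus = np.array([0.15, 0.33, 0.71, 0.22, 0.41])

r, p = pearsonr(en_face, fundus)
print(f"r = {r:.3f}, p = {p:.4f}")

# Lagged correlation of the kind reported for HE area vs. CMT: pair
# each HE measurement with the CMT value from k months earlier.
def lagged_r(he_area, cmt, k):
    return pearsonr(he_area[k:], cmt[:-k])[0]
```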


Diabetic Retinopathy , Tomography, Optical Coherence , Humans , Tomography, Optical Coherence/methods , Retrospective Studies , Diagnostic Techniques, Ophthalmological , Diabetic Retinopathy/diagnosis , Diabetic Retinopathy/complications , Photography/methods , Exudates and Transudates/metabolism
14.
BMC Med Inform Decis Mak ; 24(1): 25, 2024 Jan 26.
Article En | MEDLINE | ID: mdl-38273286

BACKGROUND: The epiretinal membrane (ERM) is a common retinal disorder characterized by abnormal fibrocellular tissue at the vitreomacular interface. Most patients with ERM are asymptomatic at early stages; therefore, screening for ERM will become increasingly important. Despite the high prevalence of ERM, few deep learning studies have investigated ERM detection in the color fundus photography (CFP) domain. In this study, we built a generative model to enhance ERM detection performance in CFP. METHODS: This deep learning study retrospectively collected 302 ERM and 1,250 healthy CFP data points from a healthcare center. A generative model using StyleGAN2 was trained on single-center data. EfficientNetB0 with StyleGAN2-based augmentation was validated using independent internal single-center data and external datasets. We randomly assigned the healthcare center data to development (80%) and internal validation (20%) datasets. Data from two publicly accessible sources were used as external validation datasets. RESULTS: StyleGAN2 facilitated realistic CFP synthesis with the characteristic cellophane reflex features of ERM. The proposed method with StyleGAN2-based augmentation outperformed typical transfer learning without a generative adversarial network. The proposed model achieved an area under the receiver operating characteristic curve (AUC) of 0.926 for internal validation. AUCs of 0.951 and 0.914 were obtained for the two external validation datasets. Compared with the deep learning model without augmentation, StyleGAN2-based augmentation improved detection performance and helped the model focus on the location of the ERM. CONCLUSIONS: We propose an ERM detection model that synthesizes realistic CFP images with the pathological features of ERM through generative deep learning. We believe our deep learning framework will help achieve more accurate detection of ERM in limited data settings.
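
The augmentation strategy can be sketched at a high level: synthetic ERM-labeled images from a trained generator are concatenated with the real training data. The generator call below (latent vector in, image batch out) is a simplification of the StyleGAN2 API, and all names are assumptions:

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

def synthesize_erm(generator, n, latent_dim=512, device="cpu"):
    """Sample n synthetic ERM-class fundus images from a trained GAN."""
    z = torch.randn(n, latent_dim, device=device)
    with torch.no_grad():
        images = generator(z)                 # (n, 3, H, W) synthetic CFPs
    labels = torch.ones(n, dtype=torch.long)  # all carry the ERM label
    return TensorDataset(images, labels)

# real_ds: the real ERM + healthy CFPs; fake_ds rebalances the classes.
# fake_ds = synthesize_erm(generator, n=948)
# train_loader = DataLoader(ConcatDataset([real_ds, fake_ds]),
#                           batch_size=32, shuffle=True)
```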


Deep Learning , Epiretinal Membrane , Humans , Epiretinal Membrane/diagnostic imaging , Retrospective Studies , Diagnostic Techniques, Ophthalmological , Photography/methods
15.
Vasc Med ; 29(2): 215-222, 2024 Apr.
Article En | MEDLINE | ID: mdl-38054219

This study aimed to review the current literature exploring the utility of noninvasive ocular imaging for the diagnosis of peripheral artery disease (PAD). Our search was conducted in early April 2022 and included the databases Medline, Scopus, Embase, Cochrane, and others. Five articles were included in the final review. Of the five studies that used ocular imaging in PAD, two studies used retinal color fundus photography, one used optical coherence tomography (OCT), and two used optical coherence tomography angiography (OCTA) to assess the ocular changes in PAD. PAD was associated with both structural and functional changes in the retina. Structural alterations around the optic disc and temporal retinal vascular arcades were seen in color fundus photography of patients with PAD compared to healthy individuals. The presence of retinal hemorrhages, exudates, and microaneurysms in color fundus photography was associated with an increased future risk of PAD, especially the severe form of the disease. The retinal nerve fiber layer (RNFL) was significantly thinner in the nasal quadrant in patients with PAD compared to age-matched healthy individuals in OCT. Similarly, the choroidal thickness in the subfoveal region was significantly thinner in patients with PAD compared to controls. Patients with PAD also had a significant reduction in the retinal and choroidal circulation in OCTA compared to healthy controls. As PAD causes thinning and ischemic changes in retinal vessels, examination of the retinal vessels using retinal imaging techniques can provide useful information about early microvascular damage in PAD. Ocular imaging could potentially serve as a biomarker for PAD. PROSPERO ID: CRD42022310637.


Optic Disk , Peripheral Arterial Disease , Humans , Tomography, Optical Coherence/methods , Photography/methods , Peripheral Arterial Disease/diagnostic imaging , Biomarkers , Retinal Vessels/diagnostic imaging
16.
Community Ment Health J ; 60(3): 457-469, 2024 04.
Article En | MEDLINE | ID: mdl-37874437

The importance of community involvement for both older adults and individuals coping with mental illness is well documented. Yet, barriers to community integration for adults with mental illness such as social stigma, discrimination, and economic marginalization are often exacerbated by increased health and mobility challenges among older adults. Using photovoice, nine older adults with mental illness represented their views of community in photographs and group discussions over a six-week period. Participant themes of community life included physical spaces, valued social roles, and access to resources in the community. Themes were anchored by older adults' perceptions of historical and cultural time comparisons between 'how things used to be' and 'how things are now.' Barriers to community integration were often related to factors such as age, mobility, and resources rather than to mental health status. Program evaluation results suggest photovoice can promote self-reflection, learning, and collaboration among older adults with mental illness.


Mental Disorders , Photography , Humans , Aged , Photography/methods , Social Stigma , Mental Disorders/psychology , Coping Skills , Learning
18.
BMJ Open Ophthalmol ; 8(1)2023 12 06.
Article En | MEDLINE | ID: mdl-38057106

OBJECTIVE: To develop and validate an explainable artificial intelligence (AI) model for detecting geographic atrophy (GA) via colour retinal photographs. METHODS AND ANALYSIS: We conducted a prospective study in which colour fundus images were collected from healthy individuals and patients with retinal diseases using an automated imaging system. All images were categorised into three classes (healthy, GA and other retinal diseases) by two experienced retinologists. In parallel, an explainable learning model using class activation mapping techniques categorised each image into one of the three classes. The AI system's performance was then compared with manual evaluations. RESULTS: A total of 540 colour retinal photographs were collected. The data were divided such that 300 images trained the AI model, 120 were used for validation and 120 for performance testing. In distinguishing between GA and healthy eyes, the model demonstrated a sensitivity of 100%, specificity of 97.5% and an overall diagnostic accuracy of 98.4%. Performance metrics such as the area under the receiver operating characteristic curve (AUC-ROC, 0.988) and the precision-recall curve (AUC-PR, 0.952) reinforced the model's robust performance. When differentiating GA from other retinal conditions, the model preserved a diagnostic accuracy of 96.8%, a precision of 90.9% and a recall of 100%, leading to an F1-score of 0.952. The AUC-ROC and AUC-PR scores were 0.975 and 0.909, respectively. CONCLUSIONS: Our explainable AI model exhibits excellent performance in detecting GA using colour retinal images. With its high sensitivity, specificity and overall diagnostic accuracy, the AI model stands as a powerful tool for the automated diagnosis of GA.
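
The abstract names class activation mapping without specifying the variant; the classic CAM formulation (Zhou et al., 2016) is one plausible choice and is small enough to sketch with stand-in tensors:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Classic CAM for a network with global average pooling.

    feature_maps: (K, h, w) activations of the last conv layer.
    fc_weights:   (num_classes, K) weights of the final linear layer.
    """
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)  # (h, w)
    cam = np.maximum(cam, 0)                  # keep positive evidence only
    return cam / (cam.max() + 1e-8)           # normalize to [0, 1]

# Toy tensors standing in for a trained model's activations and weights.
fmaps = np.random.rand(256, 7, 7)
weights = np.random.rand(3, 256)              # healthy / GA / other
heatmap = class_activation_map(fmaps, weights, class_idx=1)
```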


Artificial Intelligence , Geographic Atrophy , Humans , Geographic Atrophy/diagnosis , Prospective Studies , Color , Photography/methods
19.
Comput Biol Med ; 167: 107616, 2023 12.
Article En | MEDLINE | ID: mdl-37922601

Age-related macular degeneration (AMD) is a leading cause of vision loss in the elderly, highlighting the need for early and accurate detection. In this study, we proposed DeepDrAMD, a hierarchical vision transformer-based deep learning model that integrates data augmentation techniques and the Swin Transformer, to detect AMD and distinguish between its subtypes using color fundus photographs (CFPs). DeepDrAMD was trained on the in-house WMUEH training set and achieved high performance in AMD detection, with an AUC of 98.76% on the WMUEH test set and 96.47% on the independent external iChallenge-AMD cohort. Furthermore, DeepDrAMD effectively classified dry AMD and wet AMD, achieving AUCs of 93.46% and 91.55%, respectively, in the WMUEH cohort and another independent external ODIR cohort. Notably, DeepDrAMD excelled at distinguishing between wet AMD subtypes, achieving an AUC of 99.36% in the WMUEH cohort. Comparative analysis revealed that DeepDrAMD outperformed conventional deep-learning models and expert-level diagnosis. A cost-benefit analysis demonstrated that DeepDrAMD offers substantial cost savings and efficiency improvements compared with manual reading approaches. Overall, DeepDrAMD represents a significant advancement in AMD detection and differential diagnosis using CFPs, and has the potential to assist healthcare professionals in informed decision-making, early intervention, and treatment optimization.


Deep Learning , Macular Degeneration , Humans , Aged , Diagnosis, Differential , Macular Degeneration/diagnostic imaging , Diagnostic Techniques, Ophthalmological , Photography/methods
20.
Int Ophthalmol ; 43(12): 4851-4859, 2023 Dec.
Article En | MEDLINE | ID: mdl-37847478

PURPOSE: Early detection and treatment of diabetic retinopathy (DR) are critical for decreasing the risk of vision loss and preventing blindness. Community vision screenings may play an important role, especially in communities at higher risk for diabetes. To address the need for increased DR detection and referrals, we evaluated the use of artificial intelligence (AI) for DR screening. METHODS: Images of 124 eyes were obtained using a 45° Canon Non-Mydriatic CR-2 Plus AF retinal camera in the Department of Endocrinology Clinic (Newark, NJ) and at a community screening event (Newark, NJ). Images were initially classified by an onsite grader and uploaded for analysis by EyeArt, a cloud-based AI software developed by Eyenuk (California, USA). The images were also graded by an off-site retina specialist. Using Fleiss kappa analysis, agreement on the diagnosis of DR and on referral patterns was investigated among the three grading systems: the AI, the onsite grader, and a US board-certified retina specialist. RESULTS: The EyeArt results, onsite grader, and retina specialist had 79% overall agreement on the diagnosis of DR: 86 eyes with full agreement, 37 eyes with agreement between two graders, and 1 eye with full disagreement. The kappa value for concordance on diagnosis was 0.69 (95% CI 0.61-0.77), indicating substantial agreement. Referral patterns by EyeArt, the onsite grader, and the ophthalmologist had 85% overall agreement: 96 eyes with full agreement and 28 eyes with disagreement. The kappa value for concordance on "whether to refer" was 0.70 (95% CI 0.60-0.80), indicating substantial agreement. Using the board-certified retina specialist as the gold standard, EyeArt had 81% accuracy (101/124 eyes) for diagnosis and 83% accuracy (103/124 eyes) for referrals. For referrals, the sensitivity of EyeArt was 74%, specificity was 87%, positive predictive value was 72%, and negative predictive value was 88%. CONCLUSIONS: This retrospective cross-sectional analysis offers insights into the use of AI in diabetic screenings and the significant role it will play in automated DR detection. The EyeArt readings were beneficial, with some limitations, in a community screening environment. These limitations included decreased accuracy in the presence of cataracts and the functional cost of EyeArt uploads in a community setting.
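
The agreement statistics used here can be reproduced with standard libraries; the grader labels below are hypothetical:

```python
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Pairwise agreement between two graders (e.g., EyeArt vs. specialist);
# 1 = refer, 0 = do not refer, one label per eye.
ai = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
specialist = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(cohen_kappa_score(ai, specialist))

# Fleiss kappa across all three graders, as used in the study.
onsite = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
table, _ = aggregate_raters(list(zip(ai, specialist, onsite)))
print(fleiss_kappa(table))
```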


Diabetes Mellitus , Diabetic Retinopathy , Humans , Diabetic Retinopathy/diagnosis , Artificial Intelligence , Cross-Sectional Studies , Retrospective Studies , Mass Screening/methods , Photography/methods
...